
fix: splunk output plugin correct record accessor key for hec_token #8793

Merged
merged 1 commit into fluent:master on May 15, 2024

Conversation

@mannbiher mannbiher commented May 3, 2024

In the Splunk output plugin, fix the record accessor key for hec_token. The record accessor key should start with $, otherwise the flb_ra_translate function simply returns the key itself as the value. flb_ra_translate also returns an empty string when the key is not found, which would still pass the if (hec_token) check, so we need to use the flb_ra_translate_check function instead, which returns NULL when the key is not found.

The relevant function:

static flb_sds_t extract_hec_token(struct flb_splunk *ctx, msgpack_object map,
                                   char *tag, int tag_len)
{
    flb_sds_t hec_token;

    /* Extract HEC token (map which is from metadata lookup) */
    if (ctx->event_sourcetype_key) {
        hec_token = flb_ra_translate(ctx->ra_metadata_auth_key, tag, tag_len,
                                     map, NULL);
        if (hec_token) {
            return hec_token;
        }
        flb_plg_debug(ctx->ins, "Could not find hec_token in metadata");
        return NULL;
    }

    flb_plg_debug(ctx->ins, "Could not find a record accessor definition of hec_token");
    return NULL;
}
I am not sure we really want to check for the existence of ctx->event_sourcetype_key before searching for the hec_token key. If the intent is to check metadata_auth_key instead, the if statement is redundant, since metadata_auth_key is always defined:
ctx->metadata_auth_key = "hec_token";

If it is meant to be event_key, as the example in the original pull request #8738 shows, I can change it and commit.

Fixes #8781

Enter [N/A] in the box if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • Example configuration file for the change
  • Debug log output from testing the change
  1. Testing the fix for #8781 (fluent-bit 3.0.3 upgrade has broken splunk output plugin when event_sourcetype_key is specified)
# start fluent-bit with splunk input plugin to ingest splunk events in one terminal
./bin/fluent-bit  -i splunk -p port=8081 -o stdout -vv
# start fluent-bit with splunk output plugin with splunk_token and event_sourcetype_key parameters
./bin/fluent-bit -i dummy -p "samples=1" -o splunk -p port=8081 -psplunk_token=db496524-e7e6-4ae9-b3f0-2287d8e65cd4 -p 'event_sourcetype_key=sourcetype' -vv
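
For the "Example configuration file" item, an equivalent classic-mode configuration for the output side is sketched below; it simply mirrors the command-line parameters above, with the host taken from the 127.0.0.1 connections seen in the logs:

[INPUT]
    Name    dummy
    Samples 1

[OUTPUT]
    Name                 splunk
    Match                *
    Host                 127.0.0.1
    Port                 8081
    Splunk_Token         db496524-e7e6-4ae9-b3f0-2287d8e65cd4
    Event_Sourcetype_Key sourcetype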

Logs from the splunk output plugin process:

Fluent Bit v3.0.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

___________.__                        __    __________.__  __          ________
\_   _____/|  |  __ __   ____   _____/  |_  \______   \__|/  |_  ___  _\_____  \
 |    __)  |  | |  |  \_/ __ \ /    \   __\  |    |  _/  \   __\ \  \/ / _(__  <
 |     \   |  |_|  |  /\  ___/|   |  \  |    |    |   \  ||  |    \   / /       \
 \___  /   |____/____/  \___  >___|  /__|    |______  /__||__|     \_/ /______  /
     \/                     \/     \/               \/                        \/

[2024/05/03 09:59:31] [ info] Configuration:
[2024/05/03 09:59:31] [ info]  flush time     | 1.000000 seconds
[2024/05/03 09:59:31] [ info]  grace          | 5 seconds
[2024/05/03 09:59:31] [ info]  daemon         | 0
[2024/05/03 09:59:31] [ info] ___________
[2024/05/03 09:59:31] [ info]  inputs:
[2024/05/03 09:59:31] [ info]      dummy
[2024/05/03 09:59:31] [ info] ___________
[2024/05/03 09:59:31] [ info]  filters:
[2024/05/03 09:59:31] [ info] ___________
[2024/05/03 09:59:31] [ info]  outputs:
[2024/05/03 09:59:31] [ info]      splunk.0
[2024/05/03 09:59:31] [ info] ___________
[2024/05/03 09:59:31] [ info]  collectors:
[2024/05/03 09:59:31] [ info] [fluent bit] version=3.0.4, commit=9bacb0ac41, pid=852478
[2024/05/03 09:59:31] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/05/03 09:59:31] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/05/03 09:59:31] [ info] [cmetrics] version=0.9.0
[2024/05/03 09:59:31] [ info] [ctraces ] version=0.5.1
[2024/05/03 09:59:31] [ info] [input:dummy:dummy.0] initializing
[2024/05/03 09:59:31] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/05/03 09:59:31] [debug] [dummy:dummy.0] created event channels: read=21 write=22
[2024/05/03 09:59:31] [debug] [splunk:splunk.0] created event channels: read=23 write=24
[2024/05/03 09:59:31] [ info] [output:splunk:splunk.0] worker #0 started
[2024/05/03 09:59:31] [ info] [sp] stream processor started
[2024/05/03 09:59:31] [ info] [output:splunk:splunk.0] worker #1 started
[2024/05/03 09:59:31] [trace] [input chunk] update output instances with new chunk size diff=36, records=1, input=dummy.0
[2024/05/03 09:59:32] [trace] [task 0x7fd07c0220b0] created (id=0)
[2024/05/03 09:59:32] [debug] [task] created task=0x7fd07c0220b0 id=0 OK
[2024/05/03 09:59:32] [debug] [output:splunk:splunk.0] task_id=0 assigned to thread #0
[2024/05/03 09:59:32] [trace] [upstream] get new connection for 127.0.0.1:8081, net setup:
net.connect_timeout        = 10 seconds
net.source_address         = any
net.keepalive              = enabled
net.keepalive_idle_timeout = 30 seconds
net.max_worker_connections = 0
[2024/05/03 09:59:32] [trace] [net] connection #49 in process to 127.0.0.1:8081
[2024/05/03 09:59:32] [trace] [engine] resuming coroutine=0x7fd074001050
[2024/05/03 09:59:32] [trace] [io] connection OK
[2024/05/03 09:59:32] [debug] [upstream] KA connection #49 to 127.0.0.1:8081 is connected
[2024/05/03 09:59:32] [ warn] [record accessor] translation failed, root key=hec_token
[2024/05/03 09:59:32] [debug] [output:splunk:splunk.0] Could not find hec_token in metadata
[2024/05/03 09:59:32] [debug] [http_client] not using http_proxy for header
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_write] trying 193 bytes
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [fd 49] write_async(2)=193 (193/193)
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_write] ret=193 total=193/193
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_write] trying 80 bytes
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [fd 49] write_async(2)=80 (80/80)
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_write] ret=80 total=80/80
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_read] try up to 4095 bytes
[2024/05/03 09:59:32] [trace] [engine] resuming coroutine=0x7fd074001050
[2024/05/03 09:59:32] [trace] [io coro=0x7fd074001050] [net_read] ret=78
[2024/05/03 09:59:32] [debug] [upstream] KA connection #49 to 127.0.0.1:8081 is now available
[2024/05/03 09:59:32] [debug] [out flush] cb_destroy coro_id=0
[2024/05/03 09:59:32] [trace] [coro] destroy coroutine=0x7fd074001050 data=0x7fd074001070
[2024/05/03 09:59:32] [trace] [engine] [task event] task_id=0 out_id=0 return=OK
[2024/05/03 09:59:32] [debug] [task] destroy task=0x7fd07c0220b0 (task_id=0)
[2024/05/03 10:00:03] [trace] [upstream] destroy connection #49 to 127.0.0.1:8081
[2024/05/03 10:00:03] [debug] [upstream] drop keepalive connection #-1 to 127.0.0.1:8081 (keepalive idle timeout)

Logs from the fluent-bit splunk input plugin process. They show that the splunk_token is respected and not overridden.

Fluent Bit v3.0.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

___________.__                        __    __________.__  __          ________
\_   _____/|  |  __ __   ____   _____/  |_  \______   \__|/  |_  ___  _\_____  \
 |    __)  |  | |  |  \_/ __ \ /    \   __\  |    |  _/  \   __\ \  \/ / _(__  <
 |     \   |  |_|  |  /\  ___/|   |  \  |    |    |   \  ||  |    \   / /       \
 \___  /   |____/____/  \___  >___|  /__|    |______  /__||__|     \_/ /______  /
     \/                     \/     \/               \/                        \/

[2024/05/03 09:59:22] [ info] Configuration:
[2024/05/03 09:59:22] [ info]  flush time     | 1.000000 seconds
[2024/05/03 09:59:22] [ info]  grace          | 5 seconds
[2024/05/03 09:59:22] [ info]  daemon         | 0
[2024/05/03 09:59:22] [ info] ___________
[2024/05/03 09:59:22] [ info]  inputs:
[2024/05/03 09:59:22] [ info]      splunk
[2024/05/03 09:59:22] [ info] ___________
[2024/05/03 09:59:22] [ info]  filters:
[2024/05/03 09:59:22] [ info] ___________
[2024/05/03 09:59:22] [ info]  outputs:
[2024/05/03 09:59:22] [ info]      stdout.0
[2024/05/03 09:59:22] [ info] ___________
[2024/05/03 09:59:22] [ info]  collectors:
[2024/05/03 09:59:22] [ info] [fluent bit] version=3.0.4, commit=9bacb0ac41, pid=852468
[2024/05/03 09:59:22] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/05/03 09:59:22] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/05/03 09:59:22] [ info] [cmetrics] version=0.9.0
[2024/05/03 09:59:22] [ info] [ctraces ] version=0.5.1
[2024/05/03 09:59:22] [ info] [input:splunk:splunk.0] initializing
[2024/05/03 09:59:22] [ info] [input:splunk:splunk.0] storage_strategy='memory' (memory only)
[2024/05/03 09:59:22] [debug] [splunk:splunk.0] created event channels: read=21 write=22
[2024/05/03 09:59:22] [debug] [downstream] listening on 0.0.0.0:8081
[2024/05/03 09:59:22] [debug] [stdout:stdout.0] created event channels: read=24 write=25
[2024/05/03 09:59:22] [ info] [sp] stream processor started
[2024/05/03 09:59:22] [ info] [output:stdout:stdout.0] worker #0 started
[2024/05/03 09:59:32] [trace] [io] connection OK
[2024/05/03 09:59:32] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
[2024/05/03 09:59:32] [trace] [io coro=(nil)] [net_read] ret=273
[2024/05/03 09:59:32] [trace] [input chunk] update output instances with new chunk size diff=138, records=1, input=splunk.0
[2024/05/03 09:59:32] [trace] [io coro=(nil)] [net_write] trying 78 bytes
[2024/05/03 09:59:32] [trace] [io coro=(nil)] [net_write] ret=78 total=78/78
[2024/05/03 09:59:33] [trace] [task 0x7f1bf4025480] created (id=0)
[2024/05/03 09:59:33] [debug] [task] created task=0x7f1bf4025480 id=0 OK
[2024/05/03 09:59:33] [debug] [output:stdout:stdout.0] task_id=0 assigned to thread #0
[0] splunk.0: [[1714755572.940177337, {"hec_token"=>"Splunk db496524-e7e6-4ae9-b3f0-2287d8e65cd4"}], {"time"=>1714755571.937907, "sourcetype"=>"sourcetype", "event"=>{"message"=>"dummy"}}]
[2024/05/03 09:59:33] [debug] [out flush] cb_destroy coro_id=0
[2024/05/03 09:59:33] [trace] [coro] destroy coroutine=0x7f1bec001050 data=0x7f1bec001070
[2024/05/03 09:59:33] [trace] [engine] [task event] task_id=0 out_id=0 return=OK
[2024/05/03 09:59:33] [debug] [task] destroy task=0x7f1bf4025480 (task_id=0)
[2024/05/03 10:00:03] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
[2024/05/03 10:00:03] [trace] [io coro=(nil)] [net_read] ret=0
[2024/05/03 10:00:03] [trace] [downstream] destroy connection #40 to tcp://127.0.0.1:42012
  • Attached Valgrind output that shows no leaks or memory corruption was found

Valgrind fluent-bit splunk input plugin

valgrind ./bin/fluent-bit  -i splunk -p port=8081 -o stdout -vv
==854842== Memcheck, a memory error detector
==854842== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==854842== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info
==854842== Command: ./bin/fluent-bit -i splunk -p port=8081 -o stdout -vv
==854842==
Fluent Bit v3.0.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

___________.__                        __    __________.__  __          ________
\_   _____/|  |  __ __   ____   _____/  |_  \______   \__|/  |_  ___  _\_____  \
 |    __)  |  | |  |  \_/ __ \ /    \   __\  |    |  _/  \   __\ \  \/ / _(__  <
 |     \   |  |_|  |  /\  ___/|   |  \  |    |    |   \  ||  |    \   / /       \
 \___  /   |____/____/  \___  >___|  /__|    |______  /__||__|     \_/ /______  /
     \/                     \/     \/               \/                        \/

[2024/05/03 10:28:42] [ info] Configuration:
[2024/05/03 10:28:42] [ info]  flush time     | 1.000000 seconds
[2024/05/03 10:28:42] [ info]  grace          | 5 seconds
[2024/05/03 10:28:42] [ info]  daemon         | 0
[2024/05/03 10:28:42] [ info] ___________
[2024/05/03 10:28:42] [ info]  inputs:
[2024/05/03 10:28:42] [ info]      splunk
[2024/05/03 10:28:42] [ info] ___________
[2024/05/03 10:28:42] [ info]  filters:
[2024/05/03 10:28:42] [ info] ___________
[2024/05/03 10:28:42] [ info]  outputs:
[2024/05/03 10:28:42] [ info]      stdout.0
[2024/05/03 10:28:42] [ info] ___________
[2024/05/03 10:28:42] [ info]  collectors:
[2024/05/03 10:28:42] [ info] [fluent bit] version=3.0.4, commit=9bacb0ac41, pid=854842
[2024/05/03 10:28:42] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/05/03 10:28:42] [ info] [output:stdout:stdout.0] worker #0 started
[2024/05/03 10:28:42] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/05/03 10:28:42] [ info] [cmetrics] version=0.9.0
[2024/05/03 10:28:42] [ info] [ctraces ] version=0.5.1
[2024/05/03 10:28:42] [ info] [input:splunk:splunk.0] initializing
[2024/05/03 10:28:42] [ info] [input:splunk:splunk.0] storage_strategy='memory' (memory only)
[2024/05/03 10:28:42] [debug] [splunk:splunk.0] created event channels: read=21 write=22
[2024/05/03 10:28:42] [debug] [downstream] listening on 0.0.0.0:8081
[2024/05/03 10:28:42] [debug] [stdout:stdout.0] created event channels: read=24 write=25
[2024/05/03 10:28:42] [ info] [sp] stream processor started
[2024/05/03 10:28:54] [trace] [io] connection OK
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_read] ret=193
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_read] ret=80
[2024/05/03 10:28:54] [trace] [input chunk] update output instances with new chunk size diff=138, records=1, input=splunk.0
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_write] trying 78 bytes
[2024/05/03 10:28:54] [trace] [io coro=(nil)] [net_write] ret=78 total=78/78
[2024/05/03 10:28:54] [trace] [task 0x51e3320] created (id=0)
[0] splunk.0: [[1714757334.333953458, {"hec_token"=>"Splunk db496524-e7e6-4ae9-b3f0-2287d8e65cd4"}], {"time"=>1714757332.991472, "sourcetype"=>"sourcetype", "event"=>{"message"=>"dummy"}}]
[2024/05/03 10:28:54] [debug] [task] created task=0x51e3320 id=0 OK
[2024/05/03 10:28:54] [debug] [output:stdout:stdout.0] task_id=0 assigned to thread #0
[2024/05/03 10:28:54] [debug] [out flush] cb_destroy coro_id=0
[2024/05/03 10:28:54] [trace] [engine] [task event] task_id=0 out_id=0 return=OK
[2024/05/03 10:28:54] [trace] [coro] destroy coroutine=0x51e9680 data=0x51e96a0
[2024/05/03 10:28:54] [debug] [task] destroy task=0x51e3320 (task_id=0)
[2024/05/03 10:28:59] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
[2024/05/03 10:28:59] [trace] [io coro=(nil)] [net_read] ret=0
[2024/05/03 10:28:59] [trace] [downstream] destroy connection #40 to tcp://127.0.0.1:44364
^C[2024/05/03 10:29:03] [engine] caught signal (SIGINT)
[2024/05/03 10:29:03] [trace] [engine] flush enqueued data
[2024/05/03 10:29:03] [ warn] [engine] service will shutdown in max 5 seconds
[2024/05/03 10:29:03] [ info] [input] pausing splunk.0
[2024/05/03 10:29:03] [ info] [engine] service has stopped (0 pending tasks)
[2024/05/03 10:29:03] [ info] [input] pausing splunk.0
[2024/05/03 10:29:03] [ info] [output:stdout:stdout.0] thread worker #0 stopping...
[2024/05/03 10:29:03] [ info] [output:stdout:stdout.0] thread worker #0 stopped
==854842==
==854842== HEAP SUMMARY:
==854842==     in use at exit: 0 bytes in 0 blocks
==854842==   total heap usage: 1,728 allocs, 1,728 frees, 1,226,439 bytes allocated
==854842==
==854842== All heap blocks were freed -- no leaks are possible
==854842==
==854842== For lists of detected and suppressed errors, rerun with: -s
==854842== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Valgrind fluent-bit splunk output plugin

valgrind ./bin/fluent-bit -i dummy -p "samples=1" -o splunk -p port=8081 -psplunk_token=db496524-e7e6-4ae9-b3f0-2287d8e65cd4 -p 'event_sourcetype_key=sourcetype' -vv
==854864== Memcheck, a memory error detector
==854864== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==854864== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info
==854864== Command: ./bin/fluent-bit -i dummy -p samples=1 -o splunk -p port=8081 -psplunk_token=db496524-e7e6-4ae9-b3f0-2287d8e65cd4 -p event_sourcetype_key=sourcetype -vv
==854864==
Fluent Bit v3.0.4
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

___________.__                        __    __________.__  __          ________
\_   _____/|  |  __ __   ____   _____/  |_  \______   \__|/  |_  ___  _\_____  \
 |    __)  |  | |  |  \_/ __ \ /    \   __\  |    |  _/  \   __\ \  \/ / _(__  <
 |     \   |  |_|  |  /\  ___/|   |  \  |    |    |   \  ||  |    \   / /       \
 \___  /   |____/____/  \___  >___|  /__|    |______  /__||__|     \_/ /______  /
     \/                     \/     \/               \/                        \/

[2024/05/03 10:28:51] [ info] Configuration:
[2024/05/03 10:28:51] [ info]  flush time     | 1.000000 seconds
[2024/05/03 10:28:51] [ info]  grace          | 5 seconds
[2024/05/03 10:28:51] [ info]  daemon         | 0
[2024/05/03 10:28:51] [ info] ___________
[2024/05/03 10:28:51] [ info]  inputs:
[2024/05/03 10:28:51] [ info]      dummy
[2024/05/03 10:28:51] [ info] ___________
[2024/05/03 10:28:51] [ info]  filters:
[2024/05/03 10:28:51] [ info] ___________
[2024/05/03 10:28:51] [ info]  outputs:
[2024/05/03 10:28:51] [ info]      splunk.0
[2024/05/03 10:28:51] [ info] ___________
[2024/05/03 10:28:51] [ info]  collectors:
[2024/05/03 10:28:51] [ info] [fluent bit] version=3.0.4, commit=9bacb0ac41, pid=854864
[2024/05/03 10:28:51] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/05/03 10:28:51] [ info] [output:splunk:splunk.0] worker #0 started
[2024/05/03 10:28:51] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/05/03 10:28:52] [ info] [output:splunk:splunk.0] worker #1 started
[2024/05/03 10:28:51] [ info] [cmetrics] version=0.9.0
[2024/05/03 10:28:51] [ info] [ctraces ] version=0.5.1
[2024/05/03 10:28:51] [ info] [input:dummy:dummy.0] initializing
[2024/05/03 10:28:51] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/05/03 10:28:51] [debug] [dummy:dummy.0] created event channels: read=21 write=22
[2024/05/03 10:28:51] [debug] [splunk:splunk.0] created event channels: read=23 write=24
[2024/05/03 10:28:52] [ info] [sp] stream processor started
[2024/05/03 10:28:53] [trace] [input chunk] update output instances with new chunk size diff=36, records=1, input=dummy.0
[2024/05/03 10:28:53] [trace] [task 0x519f350] created (id=0)
[2024/05/03 10:28:53] [debug] [task] created task=0x519f350 id=0 OK
[2024/05/03 10:28:53] [trace] [upstream] get new connection for 127.0.0.1:8081, net setup:
net.connect_timeout        = 10 seconds
net.source_address         = any
net.keepalive              = enabled
net.keepalive_idle_timeout = 30 seconds
net.max_worker_connections = 0
[2024/05/03 10:28:53] [debug] [output:splunk:splunk.0] task_id=0 assigned to thread #0
[2024/05/03 10:28:54] [trace] [net] connection #49 in process to 127.0.0.1:8081
[2024/05/03 10:28:54] [trace] [engine] resuming coroutine=0x519f5f0
[2024/05/03 10:28:54] [trace] [io] connection OK
[2024/05/03 10:28:54] [debug] [upstream] KA connection #49 to 127.0.0.1:8081 is connected
[2024/05/03 10:28:54] [ warn] [record accessor] translation failed, root key=hec_token
[2024/05/03 10:28:54] [debug] [output:splunk:splunk.0] Could not find hec_token in metadata
[2024/05/03 10:28:54] [debug] [http_client] not using http_proxy for header
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_write] trying 193 bytes
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [fd 49] write_async(2)=193 (193/193)
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_write] ret=193 total=193/193
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_write] trying 80 bytes
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [fd 49] write_async(2)=80 (80/80)
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_write] ret=80 total=80/80
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_read] try up to 4095 bytes
[2024/05/03 10:28:54] [trace] [engine] resuming coroutine=0x519f5f0
[2024/05/03 10:28:54] [trace] [io coro=0x519f5f0] [net_read] ret=78
[2024/05/03 10:28:54] [trace] [engine] [task event] task_id=0 out_id=0 return=OK
[2024/05/03 10:28:54] [debug] [upstream] KA connection #49 to 127.0.0.1:8081 is now available
[2024/05/03 10:28:54] [debug] [task] destroy task=0x519f350 (task_id=0)
[2024/05/03 10:28:54] [debug] [out flush] cb_destroy coro_id=0
[2024/05/03 10:28:54] [trace] [coro] destroy coroutine=0x519f5f0 data=0x519f610
^C[2024/05/03 10:28:59] [engine] caught signal (SIGINT)
[2024/05/03 10:28:59] [trace] [engine] flush enqueued data
[2024/05/03 10:28:59] [ warn] [engine] service will shutdown in max 5 seconds
[2024/05/03 10:28:59] [ info] [input] pausing dummy.0
[2024/05/03 10:28:59] [ info] [engine] service has stopped (0 pending tasks)
[2024/05/03 10:28:59] [ info] [input] pausing dummy.0
[2024/05/03 10:28:59] [ info] [output:splunk:splunk.0] thread worker #0 stopping...
[2024/05/03 10:28:59] [trace] [upstream] destroy connection #49 to 127.0.0.1:8081
[2024/05/03 10:28:59] [ info] [output:splunk:splunk.0] thread worker #0 stopped
[2024/05/03 10:28:59] [ info] [output:splunk:splunk.0] thread worker #1 stopping...
[2024/05/03 10:28:59] [ info] [output:splunk:splunk.0] thread worker #1 stopped
==854864==
==854864== HEAP SUMMARY:
==854864==     in use at exit: 0 bytes in 0 blocks
==854864==   total heap usage: 1,900 allocs, 1,900 frees, 914,346 bytes allocated
==854864==
==854864== All heap blocks were freed -- no leaks are possible
==854864==
==854864== For lists of detected and suppressed errors, rerun with: -s
==854864== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

If this is a change to packaging of containers or native binaries then please confirm it works.

  • [N/A] Run local packaging test showing all targets (including any new ones) build.
  • [N/A] Set ok-package-test label to test for all targets (requires maintainer to do).

Documentation

  • [N/A] Documentation required for this feature.
    This is a bug fix

Backporting

  • [N/A] Backport to latest stable release.

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

@cosmo0920 cosmo0920 left a comment


Oh, I've overlooked this glitch. Thank you for tackling and fixing this! 👍

@edsiper edsiper merged commit 9aab2eb into fluent:master May 15, 2024
80 of 81 checks passed

edsiper commented May 15, 2024

Thanks for your contribution. Note that commits must be prefixed with the interface name according to https://github.com/fluent/fluent-bit/blob/master/CONTRIBUTING.md#commit-changes
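
For reference, a prefixed commit subject for this change could look like the example below (illustrative only, following the plugin-prefix convention, not the actual commit):

out_splunk: fix record accessor key for hec_token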

@edsiper edsiper added this to the Fluent Bit v3.0.4 milestone May 15, 2024
edsiper added a commit that referenced this pull request May 23, 2024
The following patch performs 2 changes in the code that help to fix the
problems found with Splunk hec_token handling:

1. In the recent PR #8793, using the record accessor API flb_ra_translate_check()
   to validate whether the hec_token field exists leads to noisy log messages, since
   that function warns when the field is not found. Most users are not
   using a hec_token set by the Splunk input plugin, so their logging gets noisy.

   This patch replaces that call with flb_ra_translate(), which fixes the problem.

2. If hec_token was set in the record metadata, it was being stored in the main
   context of the plugin; however, the flush callbacks that format and deliver the
   data run in separate/parallel threads, which could lead to a race condition if
   more than one thread tries to manipulate the value.

   This patch adds protection to the context value so it becomes thread safe.

Signed-off-by: Eduardo Silva <[email protected]>
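
As an aside, a hedged sketch of the kind of protection described in point 2 is shown below: the shared token in the plugin context is read and updated under a lock, because flush callbacks run in parallel worker threads. The field and helper names here are illustrative, not taken from the actual patch:

#include <pthread.h>

/* Illustrative only: guard the shared HEC token stored in the plugin context,
 * since parallel flush workers may read/replace it concurrently. */
static pthread_mutex_t hec_token_lock = PTHREAD_MUTEX_INITIALIZER;

static void store_metadata_hec_token(struct flb_splunk *ctx, flb_sds_t token)
{
    pthread_mutex_lock(&hec_token_lock);
    if (ctx->metadata_auth_header != NULL) {
        flb_sds_destroy(ctx->metadata_auth_header);   /* release previous value */
    }
    ctx->metadata_auth_header = token;                /* hypothetical field name */
    pthread_mutex_unlock(&hec_token_lock);
}
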
edsiper added a commit that referenced this pull request May 23, 2024
markuman pushed a commit to markuman/fluent-bit that referenced this pull request May 29, 2024
Signed-off-by: Maneesh Singh <[email protected]>
Signed-off-by: Markus Bergholz <[email protected]>
markuman pushed a commit to markuman/fluent-bit that referenced this pull request May 29, 2024